Graph learning problems are typically approached by focusing on learning the topology of a single graph when signals from all nodes are available. However, many contemporary setups involve multiple related networks and, moreover, it is often the case that only a subset of nodes is observed while the rest remain hidden. Motivated by this, we propose a joint graph learning method that takes into account the presence of hidden (latent) variables. Intuitively, the presence of the hidden nodes renders the inference task ill-posed and challenging to solve, so we overcome this detrimental influence by harnessing the similarity of the estimated graphs. To that end, we assume that the observed signals are drawn from a Gaussian Markov random field with latent variables and we carefully model the graph similarity among hidden (latent) nodes. Then, we exploit the structure resulting from the previous considerations to propose a convex optimization problem that solves the joint graph learning task by providing a regularized maximum likelihood estimator. Finally, we compare the proposed algorithm with different baselines and evaluate its performance over synthetic and real-world graphs.
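As a concrete illustration, the sketch below combines a latent-variable graphical-lasso term (a sparse precision matrix minus a PSD low-rank component capturing marginalized hidden nodes, in the spirit of Chandrasekaran et al.) with an $\ell_1$ coupling across graphs. It is a minimal sketch of this class of estimators, not the paper's exact formulation; the function name and the penalty weights alpha, beta, gamma are illustrative.

```python
# Hypothetical sketch: joint latent-variable Gaussian graphical model estimation.
# Each precision matrix is modeled as Theta_k = S_k - L_k, with S_k sparse
# (observed-node conditional dependencies) and L_k PSD low-rank (effect of
# marginalized hidden nodes); graphs are coupled via ||S_k - S_l||_1.
import cvxpy as cp

def joint_latent_graphical_lasso(emp_covs, alpha=0.1, beta=0.5, gamma=0.05):
    K, n = len(emp_covs), emp_covs[0].shape[0]
    S = [cp.Variable((n, n), symmetric=True) for _ in range(K)]  # sparse parts
    L = [cp.Variable((n, n), PSD=True) for _ in range(K)]        # low-rank parts
    obj, constraints = 0, []
    for k in range(K):
        Theta = S[k] - L[k]                            # marginal precision matrix
        obj += cp.trace(emp_covs[k] @ Theta) - cp.log_det(Theta)  # negative log-likelihood
        obj += alpha * cp.norm1(S[k]) + beta * cp.trace(L[k])     # sparsity + low rank
        constraints.append(Theta >> 0)                 # valid precision matrix
    for k in range(K):                                 # similarity across graphs
        for l in range(k + 1, K):
            obj += gamma * cp.norm1(S[k] - S[l])
    cp.Problem(cp.Minimize(obj), constraints).solve()
    return [s.value for s in S], [l.value for l in L]
```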
Predicting discrete events in time and space has many scientific applications, such as forecasting hazardous earthquakes and outbreaks of infectious diseases. History-dependent spatio-temporal Hawkes processes are often used to mathematically model such point events. However, previous approaches have faced numerous challenges, particularly when attempting to forecast one or multiple future events. In this work, we propose a new neural architecture for multi-event forecasting of spatio-temporal point processes, utilizing transformers augmented with normalizing flows and probabilistic layers. Our network makes batched predictions of complex, history-dependent spatio-temporal distributions of future discrete events, achieving state-of-the-art performance on a variety of benchmark datasets, including the South California Earthquakes, Citibike, COVID-19, and synthetic Hawkes pinwheel datasets. More generally, we illustrate how our network can be applied to any dataset of discrete events with associated markers, even when no underlying physics is known.
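For intuition, here is a toy sketch of such an architecture, assuming a transformer encoder over the event history with simple parametric density heads (a log-normal over the next inter-event time and a Gaussian over its location) standing in for the paper's normalizing flows and probabilistic layers; all names and sizes are illustrative.

```python
# Hypothetical sketch: neural spatio-temporal point-process forecaster.
# A transformer encodes the event history; density heads parameterize the
# distribution of the next event. Simple parametric densities stand in for
# the paper's normalizing flows.
import torch
import torch.nn as nn
import torch.distributions as D

class STPPForecaster(nn.Module):
    def __init__(self, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Linear(3, d_model)  # event = (inter-event time, x, y)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.time_head = nn.Linear(d_model, 2)   # log-normal params for next dt
        self.space_head = nn.Linear(d_model, 4)  # mean, log-std of 2D Gaussian

    def forward(self, history):                  # history: (batch, seq, 3)
        h = self.encoder(self.embed(history))[:, -1]  # summary of the history
        mu, log_sig = self.time_head(h).chunk(2, dim=-1)
        time_dist = D.LogNormal(mu.squeeze(-1), log_sig.exp().squeeze(-1))
        loc, log_scale = self.space_head(h).chunk(2, dim=-1)
        space_dist = D.Independent(D.Normal(loc, log_scale.exp()), 1)
        return time_dist, space_dist

# Training maximizes the log-likelihood of the observed next event:
# loss = -(time_dist.log_prob(dt_next) + space_dist.log_prob(xy_next)).mean()
```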
We consider the problem of estimating the topology of multiple networks from nodal observations, where these networks are assumed to be drawn from the same (unknown) random graph model. We adopt a graphon as our random graph model, which is a nonparametric model from which graphs of potentially different sizes can be drawn. The versatility of graphons allows us to tackle the joint inference problem even for cases where the graphs to be recovered contain different numbers of nodes and lack precise alignment across graphs. Our solution is based on combining a maximum likelihood penalty with graphon estimation schemes and can be used to augment existing network inference methods. The proposed joint network and graphon estimation is further enhanced by introducing a robust method for noisy graph sampling information. We validate our proposed approach by comparing its performance against competing methods on synthetic and real-world datasets.
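A minimal sketch of one standard graphon estimation scheme that such a pipeline could build on: a sort-and-average network histogram that pools adjacency matrices of different sizes without requiring node alignment. This is an illustrative stand-in, not the paper's exact estimator.

```python
# Hypothetical sketch: pooled graphon histogram estimation from several
# graphs of different sizes (no node alignment needed). Nodes of each graph
# are sorted by degree and mapped onto a common K x K grid; block averages
# across graphs give a piecewise-constant graphon estimate.
import numpy as np

def graphon_histogram(adjacencies, K=10):
    num = np.zeros((K, K))   # summed edge indicators per block
    den = np.zeros((K, K))   # number of node pairs per block
    for A in adjacencies:
        n = A.shape[0]
        order = np.argsort(-A.sum(axis=1))                 # sort nodes by degree
        A = A[np.ix_(order, order)]
        bins = np.minimum((np.arange(n) * K) // n, K - 1)  # node -> block index
        for i in range(n):
            for j in range(n):
                if i != j:
                    num[bins[i], bins[j]] += A[i, j]
                    den[bins[i], bins[j]] += 1
    return num / np.maximum(den, 1)                        # estimated graphon values
```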
We study p-Laplacians and spectral clustering for a recently proposed hypergraph model that incorporates edge-dependent vertex weights (EDVW). These weights can reflect the varying importance of vertices within a hyperedge, conferring higher expressivity and flexibility on the hypergraph model. By constructing submodular EDVW-based splitting functions, we convert hypergraphs with EDVW into submodular hypergraphs, for which spectral theory is better developed. In this way, existing concepts and theorems, such as p-Laplacians and Cheeger inequalities proposed under the submodular hypergraph setting, can be directly extended to hypergraphs with EDVW. For submodular hypergraphs with EDVW-based splitting functions, we propose an efficient algorithm to compute the eigenvector associated with the second smallest eigenvalue of the hypergraph 1-Laplacian. We then utilize this eigenvector to cluster the vertices, achieving higher clustering accuracy than traditional spectral clustering based on the 2-Laplacian. More broadly, the proposed algorithm applies to all graph-reducible submodular hypergraphs. Numerical experiments using real-world data demonstrate the effectiveness of spectral clustering based on the 1-Laplacian and EDVW.
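For concreteness, the sketch below clusters a hypergraph with EDVW after reducing it to a weighted clique expansion, using ordinary 2-Laplacian spectral clustering in place of the paper's 1-Laplacian algorithm; the reduction rule is one common choice, not necessarily the paper's.

```python
# Hypothetical sketch: spectral clustering of a hypergraph with
# edge-dependent vertex weights (EDVW). For brevity this reduces the
# hypergraph to a weighted clique expansion and uses the ordinary
# 2-Laplacian, not the paper's 1-Laplacian algorithm.
import numpy as np
from sklearn.cluster import KMeans

def edvw_spectral_clustering(n, hyperedges, n_clusters=2):
    # hyperedges: list of (edge_weight, {vertex: gamma}) pairs, where gamma
    # is the edge-dependent weight of each vertex within that hyperedge
    W = np.zeros((n, n))
    for w_e, gammas in hyperedges:
        total = sum(gammas.values())
        for u, gu in gammas.items():              # weighted clique expansion
            for v, gv in gammas.items():
                if u != v:
                    W[u, v] += w_e * gu * gv / total
    L = np.diag(W.sum(axis=1)) - W                # combinatorial 2-Laplacian
    _, vecs = np.linalg.eigh(L)                   # eigenvalues in ascending order
    emb = vecs[:, :n_clusters]                    # smallest eigenvectors embed nodes
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
```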
We propose a data-driven approach for power allocation in interference-limited wireless networks supporting federated learning (FL). The power policy is designed to maximize the information transmitted during the FL process under communication constraints, with the ultimate goal of improving the training accuracy and efficiency of the global FL model. The proposed power allocation policy is parameterized by a graph convolutional network, and the associated constrained optimization problem is solved via a primal-dual algorithm. Numerical experiments show that the proposed method outperforms three baseline methods in both transmission success rate and FL performance.
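A minimal sketch of this template follows, assuming a one-layer graph convolution over the channel matrix and a generic sum-rate objective with a total-power constraint in place of the paper's FL-specific objective; all sizes and step sizes are illustrative.

```python
# Hypothetical sketch: power allocation policy parameterized by a simple
# graph convolutional network and trained with a primal-dual scheme.
# H: channel-gain matrix (used both as node features and as the graph shift).
import torch
import torch.nn as nn

class GCNPolicy(nn.Module):
    def __init__(self, hidden=32, p_max=1.0):
        super().__init__()
        self.p_max = p_max
        self.w0, self.w1 = nn.Linear(1, hidden), nn.Linear(hidden, 1)

    def forward(self, H):
        x = torch.diag(H).unsqueeze(-1)           # direct channel gains as features
        x = torch.relu(H @ self.w0(x))            # one graph convolution layer
        return self.p_max * torch.sigmoid(self.w1(x)).squeeze(-1)

def sum_rate(p, H, sigma2=1e-2):
    sig = torch.diag(H) * p                       # desired signal power per user
    interf = H @ p - sig                          # received interference per user
    return torch.log2(1 + sig / (sigma2 + interf)).sum()

policy, lam = GCNPolicy(), torch.tensor(0.0)      # lam: dual variable
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(500):                              # primal-dual iterations
    H = torch.rand(10, 10)                        # sampled channel realization
    p = policy(H)
    budget_gap = p.sum() - 5.0                    # total-power constraint
    loss = -sum_rate(p, H) + lam * budget_gap     # Lagrangian (primal step)
    opt.zero_grad(); loss.backward(); opt.step()
    lam = torch.clamp(lam + 0.01 * budget_gap.detach(), min=0.0)  # dual ascent
```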
Power allocation is one of the fundamental problems in wireless networks, and a wide variety of algorithms address it from different perspectives. A common element among these algorithms is that they rely on an estimate of the channel state, which may be inaccurate due to hardware defects, noisy feedback systems, and environmental and adversarial disturbances. It is therefore essential that the power allocation output by these algorithms be stable with respect to input perturbations, in the sense that the output variation is bounded by a function of the perturbation magnitude. In this paper, we focus on UWMMSE, a modern algorithm leveraging graph neural networks, and illustrate its stability to additive input perturbations of bounded energy through both theoretical analysis and empirical validation.
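The sketch below illustrates the kind of empirical stability check described, using the classical (non-learned) scalar WMMSE iteration as a stand-in for UWMMSE: an additive channel perturbation of bounded energy is applied and the resulting change in the output power allocation is measured.

```python
# Hypothetical sketch: empirical stability check for a power-allocation
# algorithm. Classical scalar WMMSE stands in for UWMMSE; we compare the
# output change against the energy of an additive channel perturbation.
import numpy as np

def wmmse(H, p_max=1.0, sigma2=1e-2, iters=50):
    K = H.shape[0]
    v = np.full(K, np.sqrt(p_max))               # transmit amplitudes
    for _ in range(iters):
        rx = sigma2 + (H**2) @ (v**2)            # received power per user
        u = np.diag(H) * v / rx                  # MMSE receive scalars
        w = 1.0 / (1.0 - u * np.diag(H) * v)     # MSE weights
        v = w * u * np.diag(H) / ((H.T**2) @ (w * u**2))
        v = np.clip(v, 0.0, np.sqrt(p_max))
    return v**2                                  # allocated powers

rng = np.random.default_rng(0)
H = rng.rayleigh(size=(10, 10))                  # channel gains
delta = rng.normal(size=(10, 10))
delta *= 0.01 / np.linalg.norm(delta)            # bounded-energy perturbation
drift = np.linalg.norm(wmmse(H + delta) - wmmse(H))
print(f"output drift {drift:.2e} for input perturbation energy 1e-2")
```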
Learning graphs from sets of nodal observations represents a prominent problem formally known as graph topology inference. However, current approaches are limited by typically focusing on inferring a single network, and by assuming that observations from all nodes are available. First, many contemporary setups involve multiple related networks and, second, it is often the case that only a subset of nodes is observed while the remaining nodes stay hidden. Motivated by these facts, we introduce a joint graph topology inference method that models the influence of the hidden variables. Under the assumptions that the observed signals are stationary on the sought graphs and that the graphs are closely related, the joint estimation of multiple networks allows us to exploit such relationships to improve the quality of the learned graphs. Moreover, we confront the challenging problem of modeling the influence of the hidden nodes so as to minimize their detrimental effect. To obtain a tractable approach, we exploit the particular structure of the setup at hand and leverage the similarity between the different graphs, which affects both the observed and the hidden nodes. To test the proposed method, numerical simulations over synthetic and real-world graphs are provided.
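To see why hidden nodes complicate the inference, consider the Gaussian case for concreteness: marginalizing the hidden nodes turns the observed-node precision matrix into a sparse term minus a term whose rank is at most the number of hidden nodes. The check below verifies this Schur-complement identity numerically; it is background intuition, not the paper's estimator.

```python
# Hypothetical sketch: why hidden nodes make the observed precision matrix
# "sparse minus low-rank". Marginalizing h hidden nodes of a GMRF gives
# Theta_O = Theta_OO - Theta_OH @ inv(Theta_HH) @ Theta_HO (Schur complement).
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_hid = 8, 2
n = n_obs + n_hid
B = rng.normal(size=(n, n)) * (rng.random((n, n)) < 0.2)  # sparse factor
Theta = B @ B.T + n * np.eye(n)                  # sparse-ish full precision matrix
oo = Theta[:n_obs, :n_obs]
oh = Theta[:n_obs, n_obs:]
hh = Theta[n_obs:, n_obs:]
low_rank = oh @ np.linalg.inv(hh) @ oh.T         # rank <= number of hidden nodes
Theta_obs = oo - low_rank                        # marginal precision of observed nodes
# Check against direct marginalization of the covariance:
Sigma_obs = np.linalg.inv(Theta)[:n_obs, :n_obs]
assert np.allclose(np.linalg.inv(Sigma_obs), Theta_obs)
print("rank of latent term:", np.linalg.matrix_rank(low_rank))
```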
A wide range of well-established unsupervised node embedding methods can be interpreted as consisting of two distinct steps: i) the definition of a similarity matrix based on the graph of interest, followed by ii) an explicit or implicit factorization of that matrix. Inspired by this viewpoint, we propose improvements to both steps of the framework. On the one hand, we propose to encode node similarities based on the free energy distance, which interpolates between the shortest-path and commute-time distances, thus providing an additional degree of flexibility. On the other hand, we propose a matrix factorization method based on a loss function that generalizes that of the skip-gram model to arbitrary similarity matrices. Compared with factorizations based on the widely used $\ell_2$ loss, the proposed method can better preserve node pairs associated with higher similarity scores. Moreover, it can be easily implemented with advanced automatic differentiation toolkits and computed efficiently by leveraging GPU resources. Node clustering, node classification, and link prediction experiments on real-world datasets demonstrate the effectiveness of incorporating free-energy-based similarities as well as of the proposed matrix factorization, compared with state-of-the-art alternatives.
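One plausible instantiation of such a factorization is sketched below: a skip-gram-style loss applied to an arbitrary nonnegative similarity matrix, optimized with automatic differentiation as the abstract suggests. The uniform negative-sampling term and its weight are assumptions; the paper's exact loss may differ.

```python
# Hypothetical sketch: factorizing an arbitrary nonnegative similarity
# matrix M with a skip-gram-style loss. Entries with larger M_ij pull
# sigmoid(u_i . v_j) toward 1; a uniform negative term pushes scores down.
import torch
import torch.nn.functional as F

def skipgram_factorize(M, dim=16, neg_weight=1.0, steps=2000, lr=0.05):
    n = M.shape[0]
    U = torch.randn(n, dim, requires_grad=True)   # "center" embeddings
    V = torch.randn(n, dim, requires_grad=True)   # "context" embeddings
    opt = torch.optim.Adam([U, V], lr=lr)
    for _ in range(steps):
        scores = U @ V.T
        loss = -(M * F.logsigmoid(scores)).sum()               # positive term
        loss -= neg_weight * F.logsigmoid(-scores).sum() / n   # negative term
        opt.zero_grad(); loss.backward(); opt.step()
    return U.detach(), V.detach()

# Example usage with a similarity matrix derived from any pairwise distance,
# e.g. M = torch.exp(-D) for a free energy distance matrix D as above.
```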
Deploying machine learning models in production may allow adversaries to infer sensitive information about the training data. There is a vast literature analyzing different types of inference risks, ranging from membership inference to reconstruction attacks. Inspired by the success of games (i.e., probabilistic experiments) in studying security properties in cryptography, some authors describe privacy inference risks in machine learning using a similar game-based style. However, adversary capabilities and goals are often stated in subtly different ways from one presentation to another, which makes it hard to relate and compose results. In this paper, we present a game-based framework to systematize the body of knowledge on privacy inference risks in machine learning.
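As a toy illustration of the game-based style, the sketch below encodes a membership-inference game: a challenger flips a secret bit, trains with or without a target record, and a simple loss-threshold adversary guesses the bit. The record selection rule and the adversary are illustrative choices, not a definitive formulation.

```python
# Hypothetical sketch: the membership-inference game as a probabilistic
# experiment. The challenger flips a bit b, trains a model with or without
# a target record, and the adversary guesses b from the model's loss on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

def play_game(X, y, rng, threshold=0.5):
    idx = rng.integers(len(X))                    # target record z = (X[idx], y[idx])
    b = rng.integers(2)                           # challenger's secret bit
    mask = np.ones(len(X), dtype=bool)
    if b == 0:
        mask[idx] = False                         # train without the target record
    model = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    p = model.predict_proba(X[idx:idx + 1])[0, y[idx]]
    guess = int(-np.log(p) < threshold)           # low loss -> guess "member"
    return guess == b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); y = (X[:, 0] > 0).astype(int)
wins = np.mean([play_game(X, y, rng) for _ in range(200)])
print(f"adversary advantage ~ {wins - 0.5:+.3f}")
```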
The emergence of large pretrained models has enabled language models to achieve superior performance on common NLP tasks, including language modeling and question answering, compared to previous static word representation methods. Augmenting these models with a retriever that fetches related text and documents as supporting information has shown promise for solving NLP problems more interpretably, since the additional knowledge is injected explicitly rather than being captured in the models' parameters. Despite this recent progress, our analysis of retriever-augmented language models shows that this class of models still lacks the ability to reason over the retrieved documents. In this paper, we study the strengths and weaknesses of different retriever-augmented language models, such as REALM, kNN-LM, FiD, ATLAS, and Flan-T5, in reasoning over the selected documents across different tasks. In particular, we analyze the reasoning failures of each of these models and study how these failures are rooted in the retriever module as well as in the language model itself.
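To fix ideas, here is a minimal retrieve-then-read pipeline in the spirit of these models, with TF-IDF retrieval and Flan-T5 as the reader; concatenating retrieved passages into a single prompt is a simplification of architectures such as FiD or ATLAS.

```python
# Hypothetical sketch: a minimal retrieve-then-read pipeline. TF-IDF
# retrieval plus prompt concatenation stands in for the learned retrievers
# and fusion mechanisms of REALM, FiD, or ATLAS.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

docs = ["Paris is the capital of France.",
        "The Seine flows through Paris.",
        "Berlin is the capital of Germany."]
question = "What is the capital of France?"

vec = TfidfVectorizer().fit(docs)                 # sparse retriever
sims = cosine_similarity(vec.transform([question]), vec.transform(docs))[0]
top = [docs[i] for i in sims.argsort()[::-1][:2]] # top-2 supporting documents

tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
prompt = f"context: {' '.join(top)} question: {question}"
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```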